9 research outputs found

    Hyperparameter optimisation in differential evolution using Summed Local Difference Strings, a rugged but easily calculated landscape for combinatorial search problems

    Get PDF
We analyse the effectiveness of differential evolution hyperparameters in large-scale search problems, i.e. those with very many variables or vector elements, using a novel objective function that is easily calculated from the vector/string itself. The objective function is simply the sum of the differences between adjacent elements. For both binary and real-valued elements whose smallest and largest values are min and max in a vector of length N, the value of the objective function ranges between 0 and (N − 1) × (max − min) and can thus easily be normalised if desired. This provides a conveniently rugged landscape. Using this, we assess how effectively search varies with both the values of the fixed hyperparameters for Differential Evolution and the string length. String length, population size and the number of generations (computational iterations) have been studied. Finally, a neural network is trained by systematically varying three hyperparameters, viz. population size (NP), mutation factor (F) and crossover rate (CR), and two output target variables are collected: (a) the median and (b) the maximum cost function values from 10-trial experiments. This neural system is then tested on an extended range of data points, generated by varying the three parameters on a finer scale, to predict both median and maximum function costs. The results obtained from the machine learning model have been validated against actual runs using Pearson's coefficient to establish their reliability and to motivate the use of machine learning techniques over grid search for hyperparameter tuning in numerical optimisation algorithms. The performance has also been compared with SMAC3 and Optuna in addition to grid search and random search.
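The objective itself is simple enough to sketch in a few lines. The Python snippet below is a minimal illustration (not the authors' code) of the summed-local-difference cost, its normalisation, and a Differential Evolution run with fixed NP, F and CR via SciPy. The bounds, string length N = 64, the specific hyperparameter values, and the assumption that high summed-difference strings are the target (hence the negated objective, since SciPy minimises) are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the Summed Local Difference objective: the sum of
# absolute differences between adjacent elements, optionally normalised
# by its maximum attainable value (N - 1) * (max - min).
import numpy as np
from scipy.optimize import differential_evolution

def summed_local_difference(x, lo=0.0, hi=1.0, normalise=False):
    """Sum of |x[i+1] - x[i]| over all adjacent pairs of the vector x."""
    cost = np.abs(np.diff(np.asarray(x, dtype=float))).sum()
    if normalise:
        cost /= (len(x) - 1) * (hi - lo)  # scales the value into [0, 1]
    return cost

# An alternating string attains the maximum (N - 1) * (max - min).
print(summed_local_difference([0, 1, 0, 1, 0, 1]))                  # 5.0
print(summed_local_difference([0, 1, 0, 1, 0, 1], normalise=True))  # 1.0

# Illustrative DE run with fixed hyperparameters; SciPy minimises, so the
# objective is negated under the assumption that high summed-difference
# strings are sought. The parameter values below are placeholders.
N = 64
result = differential_evolution(
    lambda x: -summed_local_difference(x, lo=0.0, hi=1.0),
    bounds=[(0.0, 1.0)] * N,
    popsize=20,         # population size multiplier, playing the role of NP
    mutation=0.7,       # F
    recombination=0.9,  # CR
    maxiter=200,
    seed=0,
)
print(-result.fun)  # best (maximised) summed local difference found
```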

    Deep learning-based explainable target classification for synthetic aperture radar images

    Get PDF
Deep learning has been extensively useful for its ability to mimic the human brain when making decisions. It can extract features automatically and train models for classification and regression problems involving complex image databases. This paper presents image classification using a Convolutional Neural Network (CNN) for target recognition on a Synthetic Aperture Radar (SAR) database, together with Explainable Artificial Intelligence (XAI) to justify the obtained results. In this work, we experimented with various CNN architectures on the MSTAR dataset, which is a special type of SAR image collection. The target classification accuracy is almost 98.78% for the underlying preprocessed MSTAR database with the given parameter options in the CNN. XAI has been incorporated to explain the classification of test images by marking the decision boundary and identifying the region of interest. Thus, XAI-based image classification is a robust prototype for an automatic and transparent learning system that reduces the semantic gap between soft computing and the human way of perception.
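As a rough illustration of the kind of pipeline described (the abstract does not specify the architecture or the XAI method), the sketch below builds a small Keras CNN for 10-class SAR chips and computes a simple occlusion-based saliency map as a stand-in for the XAI step. The layer sizes, the 128×128 single-channel input shape, and the occlusion scheme are assumptions made for illustration only.

```python
# Illustrative CNN classifier for MSTAR-style SAR chips plus an
# occlusion-based saliency map; not the paper's actual model or XAI method.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1), num_classes=10):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

def occlusion_saliency(model, image, target_class, patch=16, stride=16):
    """Grey out patches and record how much the target-class score falls."""
    h, w, _ = image.shape
    baseline = model.predict(image[None], verbose=0)[0, target_class]
    heat = np.zeros((h // stride, w // stride))
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = image.mean()
            score = model.predict(occluded[None], verbose=0)[0, target_class]
            heat[i // stride, j // stride] = baseline - score
    return heat  # high values mark regions the classifier relies on

# Usage (illustrative): model = build_cnn(); train on preprocessed MSTAR
# chips; then heat = occlusion_saliency(model, chip, target_class=3).
```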

    Explaining Machine Learning-based Classifications of in-vivo Gastral Images

    No full text
This paper proposes an explainable machine learning tool that can potentially be used for decision support in medical image analysis scenarios. For a decision-support system, it is important to be able to reverse-engineer the impact of features on the final decision outcome. In the medical domain, such functionality is typically required before machine learning can be applied to clinical decision making. In this paper, we present initial experiments performed on in-vivo gastral images obtained from capsule endoscopy. Quantitative analysis has been performed to evaluate the utility of the proposed method. Convolutional neural networks have been used for training and validating on the image data set to provide the bleeding classifications. Visual explanations are provided on the images to help health professionals trust the black-box predictions. While the paper focuses on the in-vivo gastral image use case, most findings are generalizable. Peer reviewed.
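For the visual-explanation step, the sketch below shows one way a coarse attribution heatmap (from any method, e.g. the occlusion approach sketched earlier) could be blended over an endoscopy frame for clinician inspection. The upsampling scheme, colour map, and the assumption that the frame dimensions are multiples of the heatmap dimensions are illustrative, not the paper's implementation.

```python
# Illustrative overlay of an explanation heatmap on an in-vivo frame so a
# clinician can see which regions drove a "bleeding" prediction.
import numpy as np
import matplotlib.pyplot as plt

def overlay_explanation(frame, heatmap, alpha=0.4):
    """Blend a coarse heatmap over an RGB frame for visual inspection."""
    heat = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    # Upsample by simple repetition; assumes frame dims are exact multiples
    # of the heatmap dims.
    ry, rx = frame.shape[0] // heat.shape[0], frame.shape[1] // heat.shape[1]
    heat = np.kron(heat, np.ones((ry, rx)))
    plt.imshow(frame)
    plt.imshow(heat, cmap="jet", alpha=alpha)
    plt.axis("off")
    plt.title("Predicted: bleeding (highlighted regions drove the decision)")
    plt.show()
```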